Baroclinic Vorticity Production in Protoplanetary Disks; Part I: Vortex Formation
The formation of vortices in protoplanetary disks is explored via
pseudo-spectral numerical simulations of an anelastic-gas model. This model is
a coupled set of equations for vorticity and temperature in two dimensions
which includes baroclinic vorticity production and radiative cooling. Vortex
formation is unambiguously shown to be caused by baroclinicity because (1)
these simulations have zero initial perturbation vorticity and a nonzero
initial temperature distribution; and (2) turning off the baroclinic term halts
vortex formation, as shown by an immediate drop in kinetic energy and
vorticity. Vortex strength increases with: larger background temperature
gradients; warmer background temperatures; larger initial temperature
perturbations; higher Reynolds number; and higher resolution. In the
simulations presented here vortices form when the background temperatures are
and vary radially as , the initial vorticity
perturbations are zero, the initial temperature perturbations are 5% of the
background, and the Reynolds number is . A sensitivity study consisting
of 74 simulations showed that as resolution and Reynolds number increase,
vortices can form with smaller initial temperature perturbations, lower
background temperatures, and smaller background temperature gradients. For the
parameter ranges of these simulations, the disk is shown to be convectively
stable by the Solberg-Høiland criteria.
Comment: Originally submitted to The Astrophysical Journal April 3, 2006;
resubmitted November 3, 2006; accepted December 5, 2006
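As a reminder of the mechanism at work (the paper's specific anelastic equations are not reproduced here), the baroclinic source term in a vorticity equation has the standard schematic form

```latex
\frac{D\omega}{Dt} \;\supset\; \frac{1}{\rho^{2}}\,
\left(\nabla\rho \times \nabla p\right)\cdot\hat{z}
```

This term vanishes when surfaces of constant density and constant pressure coincide; a radial temperature gradient misaligns them and thereby generates vorticity, which is consistent with the observation above that switching the baroclinic term off halts vortex formation.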
Size effect on fracture toughness of snow
Abstract: Depending on the scale of observation, many engineered and natural materials show different mechanical behaviour. Size-effect theories, based on a multiscale approach, therefore analyse intrinsic effects (due to microstructural constraints, e.g., grain size) and extrinsic effects (caused by dimensional constraints) in order to improve knowledge in materials science and applied mechanics. Nevertheless, several problems in solid mechanics and materials science cannot be solved by conventional approaches, because of the complexity and uncertainty of material properties, especially across scales. For this reason, a simple model capable of predicting fracture toughness at different scales has been developed and is presented in this paper. The model is based on the Golden Ratio, first defined by Euclid as: "A straight line is said to have been cut in extreme and mean ratio when, as the whole line is to the greater segment, so is the greater to the less". Intimately connected with the Fibonacci sequence (1, 1, 2, 3, 5, 8, 13, ...), this number governs growth in nature and recurs in many disciplines, such as art, architecture, design, and medicine. For man-made and natural brittle materials, the Golden Ratio defines the relationship between the average crack spacing and the thickness of quasi-brittle materials. The theoretical results provided by the Golden Ratio, used to calibrate a size-effect law for fracture toughness, agree with experimental measurements from several test campaigns carried out on different materials (i.e., rocks, ice, and concrete). This paper presents the case of the fracture toughness of snow, in which the irrational number 1.61803 recurs as the geometrical dimensions vary. This is confirmed by the results of experimental campaigns performed on snow samples.
Thus, we reveal the existence of a size-effect law for the fracture toughness of snow, and we argue for the centrality of the Golden Ratio in the fracture properties of quasi-brittle materials. Consequently, by means of the proposed model, the K_Ic of large samples can be predicted simply and rapidly, without knowing the material properties, by testing prototypes of smaller dimensions.
Tracking power system events with accuracy-based PMU adaptive reporting rate
Fast dynamics and transient events are becoming more and more frequent in power systems, due to the high penetration of renewable energy sources and the consequent lack of inertia. In this scenario, Phasor Measurement Units (PMUs) are expected to track the monitored quantities. Such functionality is related not only to the PMU accuracy (as per the IEC/IEEE 60255-118-1 standard) but also to the PMU reporting rate (RR). High RRs allow fast dynamics to be tracked, but produce much redundant measurement data under normal conditions. In view of an effective tradeoff, the present paper proposes an adaptive RR mechanism based on a real-time selection of the measurements, with the target of preserving the information content while reducing the data rate. The proposed method has been tested on real-world datasets and applied to four different PMU algorithms. The results prove the method's effectiveness in reducing the average data throughput, as well as its scalability at the PMU concentrator or storage level.
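The paper's selection criterion is accuracy-based; as a minimal sketch of the general idea only (the fixed threshold and heartbeat below are assumptions, not the paper's rule), a send-on-delta scheme already shows how an adaptive RR compresses steady-state data while densifying reports during transients:

```python
def select_reports(samples, threshold, heartbeat):
    """Send-on-delta selection: report a sample when it deviates from the
    last reported value by more than `threshold`, or when `heartbeat`
    samples have elapsed since the last report (keeps a minimum rate).
    Returns (index, value) pairs of the reported measurements."""
    reported = []
    last_val = None
    since = 0
    for i, v in enumerate(samples):
        since += 1
        if last_val is None or abs(v - last_val) > threshold or since >= heartbeat:
            reported.append((i, v))
            last_val = v
            since = 0
    return reported
```

On a steady signal this reports only the heartbeat samples; during a ramp or step it reports densely, which is the tradeoff between information content and data rate described above.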
Treatment of atherosclerotic renovascular hypertension: review of observational studies and a meta-analysis of randomized clinical trials.
Atherosclerotic renal artery stenosis can cause ischaemic nephropathy and arterial hypertension. We herein review the observational and randomized clinical trials (RCTs) comparing medical and endovascular treatment for control of hypertension and renal function preservation. Using the Population Intervention Comparison Outcome (PICO) strategy, we identified the relevant studies and performed a novel meta-analysis of all RCTs to determine the efficacy and safety of endovascular treatment compared with medical therapy. The following outcomes were examined: baseline-to-follow-up difference in mean systolic and diastolic blood pressure (BP), serum creatinine, number of drugs at follow-up, incident events (heart failure, stroke, and worsening renal function), mortality, and cumulative relative risk of heart failure, stroke, and worsening renal function. Seven studies comprising a total of 2155 patients (1741 available at follow-up) were considered, including the recently reported CORAL Study. Compared with baseline, diastolic BP fell more at follow-up in patients in the endovascular than in the medical treatment arm (standard difference in means -0.21, 95% confidence interval (CI): -0.342 to -0.078, P = 0.002), despite a greater reduction in the mean number of antihypertensive drugs (standard difference in means -0.201, 95% CI: -0.302 to -0.1, P < 0.001). In contrast, follow-up changes (from baseline) in systolic BP, serum creatinine, and incident cardiovascular event rates did not differ between treatment arms. Thus, patients with atherosclerotic renal artery stenosis receiving endovascular treatment required fewer antihypertensive drugs at follow-up than those medically treated.
Notwithstanding this, they achieved better control of diastolic BP.
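For readers unfamiliar with the pooled statistic quoted above, fixed-effect inverse-variance pooling of standardized mean differences can be sketched as follows (illustrative only; the actual per-study inputs and meta-analytic software used by the authors are not given in the abstract):

```python
import math

def pooled_smd(effects):
    """Fixed-effect inverse-variance pooling of standardized mean
    differences (SMDs). `effects` is a list of (smd, variance) pairs,
    one per study. Returns the pooled SMD and its 95% confidence
    interval: each study is weighted by 1/variance, and the pooled
    standard error is sqrt(1 / sum of weights)."""
    weights = [1.0 / var for _, var in effects]
    pooled = sum(w * d for (d, _), w in zip(effects, weights)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)
```

A study with a smaller variance (larger sample) pulls the pooled estimate toward its own SMD, which is why a few large trials such as CORAL dominate such summaries.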
A Reverse Dynamical Investigation of the Catastrophic Wood-Snow Avalanche of 18 January 2017 at Rigopiano, Gran Sasso National Park, Italy
On 18 January 2017 a catastrophic avalanche destroyed the Rigopiano Gran Sasso Resort & Wellness (Rigopiano Hotel) in the Gran Sasso National Park in Italy, with 40 people trapped and a death toll of 29. This article describes the location of the disaster and the general meteorological scenario, with field investigations providing insight into the avalanche dynamics and its interaction with the hotel buildings. The data gathered in situ suggest that the avalanche was a fluidized dry-snow avalanche, which entrained a slightly warmer snow cover along the path and reached extremely long runout distances, with a braking effect from mountain forests. The avalanche that reached the Rigopiano area was a "wood-snow" avalanche: a mixture of snow and uprooted and crushed trees, rocks, and other debris. There were no direct eyewitnesses to the event, and a quick post-event survey used a numerical model to analyze the dynamics of the event, estimating the pressure, velocity, and direction of the natural flow and the causes of the destruction of the hotel. Considering its magnitude and the damage it caused, the avalanche was rated as high to very high on the intensity scale.
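To give a sense of the magnitudes involved, a common first-order estimate of avalanche impact pressure is the dynamic pressure p = c_d * rho * v^2 (this is a textbook approximation, not the numerical model used in the survey, whose details are not given in the abstract):

```python
def impact_pressure(density, velocity, drag_coeff=1.0):
    """First-order avalanche impact-pressure estimate in Pa:
    p = c_d * rho * v**2, with rho the flow density (kg/m^3),
    v the front velocity (m/s), and c_d an order-one drag factor."""
    return drag_coeff * density * velocity ** 2

# A fluidized dry-snow flow at, say, rho = 100 kg/m^3 and v = 30 m/s
# (illustrative numbers, not measured values from the Rigopiano event)
# gives p = 90000.0 Pa, i.e. 90 kPa.
print(impact_pressure(100.0, 30.0))
```

The quadratic dependence on velocity is why long-runout fluidized flows can be so destructive even at moderate densities.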
The Domain Chaos Puzzle and the Calculation of the Structure Factor and Its Half-Width
The disagreement between experiment and the Ginzburg-Landau (GL) model over
the scaling of the correlation length xi for domain chaos was resolved.
The Swift-Hohenberg (SH) domain-chaos model was integrated numerically to
acquire test images to study the effect of a finite image-size on the
extraction of xi from the structure factor (SF). The finite image size had a
significant effect on the SF determined with the Fourier-transform (FT) method.
The maximum entropy method (MEM) was able to overcome this finite image-size
problem and produced fairly accurate SFs for the relatively small image sizes
provided by experiments.
Correlation lengths often have been determined from the second moment of the
SF of chaotic patterns because the functional form of the SF is not known.
Integration of several test functions provided analytic results indicating that
this may not be a reliable method of extracting xi. For both a Gaussian and a
squared SH form, the correlation length xibar=1/sigma, determined from the
variance sigma^2 of the SF, has the same dependence on the control parameter
epsilon as the length xi contained explicitly in the functional forms. However,
for the SH and the Lorentzian forms we find xibar ~ xi^1/2.
Results for xi determined from new experimental data by fitting the
functional forms directly to the experimental SF yielded xi ~ epsilon^-nu with
nu ~= 1/4 for all four functions in the case of the FT method, but nu ~= 1/2,
in agreement with the GL prediction, in the case of the MEM. Over a wide
range of epsilon and wave number k, the experimental SFs collapsed onto a
unique curve when appropriately scaled by xi.
Comment: 15 pages, 26 figures, 1 table
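The second-moment estimate discussed above can be illustrated numerically: for a Gaussian SF, the length xibar = 1/sigma is simply proportional to the xi appearing in the functional form, so both share the same epsilon dependence (the grid and peak position below are arbitrary choices made for the demonstration):

```python
import math

def second_moment_length(k, S):
    """xibar = 1/sigma, with sigma^2 the variance of the structure
    factor treated as a normalized distribution over wave number k
    (uniform grid assumed)."""
    dk = k[1] - k[0]
    norm = sum(S) * dk
    k0 = sum(ki * si for ki, si in zip(k, S)) * dk / norm
    var = sum((ki - k0) ** 2 * si for ki, si in zip(k, S)) * dk / norm
    return 1.0 / var ** 0.5

n = 20001
k = [10.0 * i / (n - 1) for i in range(n)]
ratios = []
for xi in (1.0, 2.0, 4.0):
    # Gaussian SF with the length xi explicit in the functional form
    S = [math.exp(-((ki - 5.0) * xi) ** 2) for ki in k]
    ratios.append(second_moment_length(k, S) / xi)
# All ratios equal sqrt(2): xibar tracks xi for the Gaussian form,
# in contrast to the SH and Lorentzian forms, where xibar ~ xi**0.5.
```

For this Gaussian, var = 1/(2*xi**2), so xibar = sqrt(2)*xi exactly; the numerical ratios confirm it.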
The nature and evolution of Nova Cygni 2006
AIMS: Nova Cyg 2006 has been intensively observed throughout its full
outburst. We investigate the energetics and evolution of the central source and
of the expanding ejecta, their chemical abundances and ionization structure,
and the formation of dust. METHOD: We recorded low, medium, and/or
high-resolution spectra (calibrated into accurate absolute fluxes) on 39
nights, along with 2353 photometric UBVRcIc measures on 313 nights, and
complemented them with IR data from the literature. RESULTS: The nova displayed
initially the normal photometric and spectroscopic evolution of a fast nova of
the FeII-type. Pre-maximum, principal, diffuse-enhanced, and Orion absorption
systems developed in a normal way. After the initial outburst, the nova
progressively slowed its fading pace until the decline reversed and a second
maximum was reached (eight months later), accompanied by large spectroscopic
changes. Following the rapid decline from second maximum, the nova finally
entered the nebular phase and formed optically thin dust. We computed the
amount of formed dust and performed a photo-ionization analysis of the
emission-line spectrum during the nebular phase, which showed a strong
enrichment of the ejecta in nitrogen and oxygen, and none in neon, in agreement
with theoretical predictions for the estimated 1.0 Msun white dwarf in Nova Cyg
2006. The similarities with the poorly investigated V1493 Nova Aql 1999a are
discussed.
Comment: in press in Astronomy and Astrophysics
ROMA: a map-making algorithm for polarised CMB data sets
We present ROMA, a parallel code to produce joint optimal temperature and
polarisation maps out of multidetector CMB observations. ROMA is a fast,
accurate and robust implementation of the iterative generalised least squares
approach to map-making. We benchmark ROMA on realistic simulated data from the
last, polarisation sensitive, flight of BOOMERanG.
Comment: Accepted for publication in Astronomy & Astrophysics. Version with
higher quality figures available at http://www.fisica.uniroma2.it/~cosmo/ROM
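The generalised least squares map-maker solves (A^T N^-1 A) m = A^T N^-1 d for the map m, given time-ordered data d, pointing matrix A, and noise covariance N. ROMA does this iteratively for correlated noise and joint temperature/polarisation; the sketch below (not ROMA's code) shows only the degenerate white-noise, temperature-only case, where the system becomes diagonal and GLS reduces to noise-weighted binning:

```python
def gls_map_white_noise(pixels, data, noise_var, npix):
    """GLS map estimate for white noise: time sample i observes pixel
    pixels[i] with value data[i] and noise variance noise_var[i].
    A^T N^-1 A is then diagonal, so the solution is a per-pixel
    noise-weighted average."""
    num = [0.0] * npix
    den = [0.0] * npix
    for p, d, v in zip(pixels, data, noise_var):
        w = 1.0 / v          # N^-1 entry for this sample
        num[p] += w * d      # accumulates A^T N^-1 d
        den[p] += w          # accumulates diag(A^T N^-1 A)
    return [a / b for a, b in zip(num, den)]
```

With correlated (1/f) noise, N^-1 is no longer diagonal and the same normal equations must be solved iteratively (e.g. by conjugate gradients), which is where codes like ROMA earn their keep.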
Processor Allocation for Optimistic Parallelization of Irregular Programs
Optimistic parallelization is a promising approach for the parallelization of
irregular algorithms: potentially interfering tasks are launched dynamically,
and the runtime system detects conflicts between concurrent activities,
aborting and rolling back conflicting tasks. However, parallelism in irregular
algorithms is very complex. In a regular algorithm like dense matrix
multiplication, the amount of parallelism can usually be expressed as a
function of the problem size, so it is reasonably straightforward to determine
how many processors should be allocated to execute a regular algorithm of a
certain size (this is called the processor allocation problem). In contrast,
parallelism in irregular algorithms can be a function of input parameters, and
the amount of parallelism can vary dramatically during the execution of the
irregular algorithm. Therefore, the processor allocation problem for irregular
algorithms is very difficult.
In this paper, we describe the first systematic strategy for addressing this
problem. Our approach is based on a construct called the conflict graph, which
(i) provides insight into the amount of parallelism that can be extracted from
an irregular algorithm, and (ii) can be used to address the processor
allocation problem for irregular algorithms. We show that this problem is
related to a generalization of the unfriendly seating problem and, by extending
Turán's theorem, we obtain a worst-case class of problems for optimistic
parallelization, which we use to derive a lower bound on the exploitable
parallelism. Finally, using some theoretically derived properties and some
experimental facts, we design a quick and stable control strategy for solving
the processor allocation problem heuristically.
Comment: 12 pages, 3 figures, extended version of SPAA 2011 brief announcement
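The conflict-graph idea can be made concrete with a small sketch (hypothetical code, not the paper's system): nodes are speculative tasks, edges join tasks whose concurrent execution would conflict, and a maximal independent set is one batch of tasks that can commit together. Any maximal independent set has size at least n/(max_degree + 1), the Turán-style bound the paper generalizes:

```python
def greedy_independent_set(n, edges):
    """Greedy maximal independent set on a conflict graph with nodes
    0..n-1. Processing low-degree nodes first, pick a node unless a
    previously chosen neighbour already blocks it."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    chosen, blocked = [], set()
    for v in sorted(range(n), key=lambda x: len(adj[x])):
        if v not in blocked:
            chosen.append(v)
            blocked.add(v)        # block the node itself...
            blocked |= adj[v]     # ...and every task it conflicts with
    return chosen
```

On a path 0-1-2-3-4 this commits tasks {0, 2, 4} in one round and leaves {1, 3} for the next, which is exactly the per-round view of exploitable parallelism that a processor-allocation heuristic needs.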